34 research outputs found

    Towards Advanced Interactive Visualization for Virtual Atlases

    An atlas is generally defined as a bound collection of tables, charts, or illustrations describing a phenomenon. In an anatomical atlas, for example, a collection of representative illustrations and text describes anatomy for the purpose of communicating anatomical knowledge. The atlas serves as a reference frame for comparing and integrating data from different sources by spatially or semantically relating collections of drawings, imaging data, and/or text. In the field of medical image processing, atlas information is often constructed from a collection of regions of interest, which are based on medical images annotated by domain experts. Such an atlas may be employed, for example, for the automatic segmentation of medical imaging data. The combination of interactive visualization techniques with atlas information opens up new possibilities for content creation, curation, and navigation in virtual atlases. With interactive visualization of atlas information, students can inspect and explore anatomical atlases in ways that are not possible with the traditional book format, for example by viewing the illustrations from arbitrary viewpoints. With advanced interaction techniques, it becomes possible to query the data that forms the basis of the atlas, empowering researchers to access a wealth of information in new ways. So far, atlas-based visualization has been employed mainly in medical education and biological research. In this survey, we provide an overview of current digital biomedical atlas tasks and applications and summarize relevant visualization techniques. We discuss recent approaches for providing next-generation visual interfaces to navigate atlas data that go beyond common text-based search and hierarchical lists. Finally, we reflect on open challenges and opportunities for the next steps in interactive atlas visualization.
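
    The atlas-based segmentation mentioned above is typically realized by registering the atlas to a target scan and then propagating the expert labels. Below is a minimal sketch of that idea with SimpleITK, using simple rigid registration; the file names (atlas.nii.gz, atlas_labels.nii.gz, patient.nii.gz) are hypothetical, and this is a generic illustration rather than any specific tool from the survey.

        import SimpleITK as sitk

        # Hypothetical inputs: an atlas image with an expert label map,
        # and a target patient scan to be segmented.
        atlas = sitk.ReadImage("atlas.nii.gz", sitk.sitkFloat32)
        labels = sitk.ReadImage("atlas_labels.nii.gz", sitk.sitkUInt8)
        target = sitk.ReadImage("patient.nii.gz", sitk.sitkFloat32)

        # Rigid registration of the atlas onto the target image.
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
        reg.SetOptimizerScalesFromPhysicalShift()
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            target, atlas, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))
        transform = reg.Execute(target, atlas)

        # Propagate the atlas labels with nearest-neighbor interpolation,
        # yielding an automatic segmentation of the target scan.
        segmentation = sitk.Resample(labels, target, transform,
                                     sitk.sitkNearestNeighbor, 0, sitk.sitkUInt8)
        sitk.WriteImage(segmentation, "patient_segmentation.nii.gz")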

    Longitudinal visualization for exploratory analysis of multiple sclerosis lesions

    In multiple sclerosis (MS), the amount of brain damage and the anatomical location, shape, and changes of lesions are important aspects that help medical researchers and clinicians to understand the temporal patterns of the disease. Interactive visualization of longitudinal MS data can support studies aimed at exploratory analysis of lesion and healthy-tissue topology. Existing visualizations in this context comprise bar charts and summary measures, such as absolute lesion counts, volumes, and volume changes, to summarize lesion trajectories over time. These techniques work well for comparisons between two time points, but for frequent follow-up scans, understanding patterns in multimodal data is difficult without suitable visualization approaches. As a solution, we propose a visualization application that presents lesion exploration tools through interactive visualizations suitable for large time-series data. In addition to various volumetric and temporal exploration facilities, we include an interactive stacked area graph with integrated features that enable the comparison of lesion characteristics, such as intensity or volume change. We derive the input data for the longitudinal visualizations from automated lesion tracking. For cases with a large number of follow-ups, our visualization design provides useful summary information while allowing medical researchers and clinicians to study features at a finer granularity. We demonstrate the utility of our visualization on simulated datasets through an evaluation with domain experts.
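
    To illustrate the central encoding, a stacked area graph draws each tracked lesion as one band across the follow-up scans, so the total height reflects the overall lesion burden at each time point. A minimal sketch with matplotlib on synthetic data follows; the lesion IDs and volumes are invented, not output of the described application.

        import numpy as np
        import matplotlib.pyplot as plt

        # Synthetic longitudinal data: lesion volume (mm^3) per follow-up scan.
        scans = np.arange(8)                      # follow-up index
        rng = np.random.default_rng(seed=0)
        lesions = {f"lesion {i}": np.abs(rng.normal(100, 30, size=scans.size))
                   for i in range(5)}

        # Stacked area graph: one band per tracked lesion.
        fig, ax = plt.subplots()
        ax.stackplot(scans, lesions.values(), labels=lesions.keys())
        ax.set_xlabel("Follow-up scan")
        ax.set_ylabel("Lesion volume (mm$^3$)")
        ax.legend(loc="upper left", fontsize="small")
        plt.show()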

    Ten Open Challenges in Medical Visualization

    The medical domain has been an inspiring application area in visualization research for many years, but many open challenges remain. The driving forces of medical visualization research have been strengthened by novel developments, for example, in deep learning, the advent of affordable VR technology, and the need to provide medical visualizations for broader audiences. At IEEE VIS 2020, we hosted an Application Spotlight session to highlight recent medical visualization research topics. With this article, we provide the visualization community with ten such open challenges, primarily focused on the visualization of medical imaging data. We first describe the unique nature of medical data in terms of data preparation, access, and standardization. Subsequently, we cover open visualization research challenges related to uncertainty, multimodal and multiscale approaches, and evaluation. Finally, we emphasize user-related challenges, focusing on explainable AI, immersive visualization, P4 medicine, and narrative visualization.

    MuSIC: Multi-Sequential Interactive Co-Registration for Cancer Imaging Data based on Segmentation Masks

    In gynecologic cancer imaging, multiple magnetic resonance imaging (MRI) sequences are acquired per patient to reveal different tissue characteristics. However, after image acquisition, the anatomical structures can be misaligned across the sequences due to changing patient location in the scanner and organ movements. The co-registration process aims to align the sequences to allow for multi-sequential tumor imaging analysis. However, automatic co-registration often leads to unsatisfactory results. To address this problem, we propose the web-based application MuSIC (Multi-Sequential Interactive Co-registration). The approach allows medical experts to co-register multiple sequences simultaneously based on a pre-defined segmentation mask generated for one of the sequences. Our contributions lie in our proposed workflow. First, a shape-matching algorithm based on dual annealing searches for the tumor position in each sequence. The user can then interactively adapt the proposed segmentation positions if needed. During this procedure, we include a multi-modal magic lens visualization for visual quality assessment. Then, we register the volumes based on the segmentation mask positions, allowing for both rigid and deformable registration. Finally, we conducted a usability analysis with seven medical and machine learning experts to verify the utility of our approach. Our participants highly appreciate the multi-sequential setup and see themselves using MuSIC in the future. Best Paper Honorable Mention at VCBM 2022.
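
    The shape-matching step can be understood as a global optimization over a translation that places the reference segmentation mask onto the corresponding region in another sequence. A minimal sketch with scipy.optimize.dual_annealing follows; the cost function (mean intensity under the shifted mask) and the toy mask/volume are assumptions for illustration, not the authors' implementation.

        import numpy as np
        from scipy.ndimage import shift
        from scipy.optimize import dual_annealing

        def cost(offset, mask, volume):
            """Negative mean intensity of `volume` under the shifted mask."""
            moved = shift(mask.astype(float), offset, order=0) > 0.5
            return -volume[moved].mean() if moved.any() else 0.0

        # Toy data: a cubic tumor mask, and a second sequence in which the
        # bright "tumor" region sits at a slightly different position.
        mask = np.zeros((32, 32, 32)); mask[12:18, 12:18, 12:18] = 1
        volume = np.random.default_rng(0).random((32, 32, 32))
        volume[14:20, 10:16, 13:19] += 2.0

        # Search translations of up to +/-10 voxels along each axis.
        result = dual_annealing(cost, bounds=[(-10, 10)] * 3,
                                args=(mask, volume), maxiter=200, seed=0)
        print("proposed mask offset (voxels):", np.round(result.x, 1))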

    Interobserver agreement and prognostic impact for MRI-based 2018 FIGO staging parameters in uterine cervical cancer

    Objectives: To evaluate the interobserver agreement for MRI-based 2018 International Federation of Gynecology and Obstetrics (FIGO) staging parameters in patients with cervical cancer and to assess the prognostic value of these MRI parameters in relation to other clinicopathological markers. Methods: This retrospective study included 416 women with histologically confirmed cervical cancer who underwent pretreatment pelvic MRI from May 2002 to December 2017. Three radiologists independently recorded MRI-derived staging parameters incorporated in the 2018 FIGO staging system. Kappa coefficients (κ) for interobserver agreement were calculated. The predictive and prognostic value of the MRI parameters was explored using ROC analyses and Kaplan–Meier analyses with log-rank tests, and analyzed in relation to clinicopathological patient characteristics. Results: Overall agreement was substantial for the staging parameters: tumor size > 2 cm (κ = 0.80), tumor size > 4 cm (κ = 0.76), tumor size categories (≤ 2 cm; > 2 and ≤ 4 cm; > 4 cm) (κ = 0.78), parametrial invasion (κ = 0.63), vaginal invasion (κ = 0.61), and enlarged lymph nodes (κ = 0.63). A higher MRI-derived tumor size category (≤ 2 cm; > 2 and ≤ 4 cm; > 4 cm) was associated with a stepwise reduction in survival (p ≤ 0.001 for all). Tumor size > 4 cm and parametrial invasion at MRI were associated with aggressive clinicopathological features, and incorporating these MRI-based staging parameters improved risk stratification compared with the corresponding clinical assessments alone. Conclusion: The interobserver agreement for central MRI-derived 2018 FIGO staging parameters was substantial. MRI improved the identification of patients with aggressive clinicopathological features and poor survival, demonstrating the potential of MRI to enable better prognostication and treatment tailoring in cervical cancer.
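
    Pairwise interobserver agreement of the kind reported here can be computed with Cohen's kappa, for example via scikit-learn. A minimal sketch follows; the ten binary readings (a staging parameter such as parametrial invasion: yes/no) are invented.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical readings from two radiologists for the same ten patients.
        rater1 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
        rater2 = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

        kappa = cohen_kappa_score(rater1, rater2)
        print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect, 0 = chance-level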

    Fully Automatic Whole-Volume Tumor Segmentation in Cervical Cancer

    Uterine cervical cancer (CC) is the most common gynecologic malignancy worldwide. Whole-volume radiomic profiling from pelvic MRI may yield prognostic markers for tailoring treatment in CC. However, radiomic profiling relies on manual tumor segmentation, which is unfeasible in the clinic. We present a fully automatic method for the 3D segmentation of primary CC lesions using state-of-the-art deep learning (DL) techniques. In 131 CC patients, the primary tumor was manually segmented on T2-weighted MRI by two radiologists (R1, R2). Patients were separated into a training/validation (n = 105) and a test (n = 26) cohort. The segmentation performance of the DL algorithm compared with R1/R2 was assessed with Dice coefficients (DSCs) and Hausdorff distances (HDs) in the test cohort. The trained DL network retrieved whole-volume tumor segmentations yielding median DSCs of 0.60 and 0.58 for DL compared with R1 (DL-R1) and R2 (DL-R2), respectively, whereas the DSC for R1-R2 was 0.78. Agreement on primary tumor volumes was excellent between raters (R1-R2: intraclass correlation coefficient (ICC) = 0.93), but lower between the DL algorithm and the raters (DL-R1: ICC = 0.43; DL-R2: ICC = 0.44). The developed DL algorithm enables automated estimation of tumor size and primary CC tumor segmentation. However, segmentation agreement between raters is better than that between the DL algorithm and the raters.
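
    The reported Dice similarity coefficient measures the voxel-wise overlap of two binary masks, DSC = 2|A ∩ B| / (|A| + |B|), and the Hausdorff distance measures the largest mismatch between them. A minimal NumPy/SciPy sketch on invented toy masks:

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        def dice(a: np.ndarray, b: np.ndarray) -> float:
            """Dice similarity coefficient between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

        # Hypothetical rater vs. algorithm masks on a small volume.
        rater = np.zeros((16, 16, 16), bool); rater[4:10, 4:10, 4:10] = True
        model = np.zeros((16, 16, 16), bool); model[5:11, 5:11, 4:10] = True

        # Symmetric Hausdorff distance over all mask voxels (for simplicity;
        # evaluations typically use surface voxels only).
        pts_r, pts_m = np.argwhere(rater), np.argwhere(model)
        hd = max(directed_hausdorff(pts_r, pts_m)[0],
                 directed_hausdorff(pts_m, pts_r)[0])
        print(f"DSC = {dice(rater, model):.2f}, HD = {hd:.2f} voxels")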

    The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position

    Three-dimensional (3D) ultrasound imaging and visualization is often used in medical diagnostics, especially in prenatal screening. Screening the development of the fetal organs as well as the overall growth is important to assess possible complications early on. State-of-the-art approaches involve taking standardized measurements and comparing them with standardized tables. The measurements are taken in a 2D slice view, where the fetal pose may complicate taking precise measurements. Performing the analysis in a 3D view would enable the viewer to better discriminate between artefacts and representative information. Making data comparable between different investigations and patients is a goal in medical imaging, and it is often achieved by standardization, as is done in magnetic resonance imaging (MRI). With this paper, we introduce a novel approach to provide a standardization method for fetal 3D ultrasound screenings. Our approach, called "The Vitruvian Baby," incorporates a complete pipeline for standardized measuring in fetal 3D ultrasound. The input of the method is a 3D ultrasound screening of a fetus, and the output is the fetus in a standardized T-pose. In this pose, taking measurements is easier, and comparisons between different fetuses are possible. In addition to the transformation of the 3D ultrasound data, we create an abstract representation of the fetus based on accurate measurements. We demonstrate the accuracy of our approach on simulated data where the ground truth is known.
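
    Conceptually, reforming a fetus into a T-pose rigidly rotates each labeled body segment about its proximal joint until its axis matches the target pose. The sketch below shows this single step for one limb using Rodrigues' rotation formula; the joint position, limb points, and target axis are all invented, and this is a simplification rather than the paper's full pipeline.

        import numpy as np

        def rotation_between(u, v):
            """Rotation matrix mapping direction u onto direction v (Rodrigues)."""
            u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
            k = np.cross(u, v)
            c, s = np.dot(u, v), np.linalg.norm(k)
            if s < 1e-8:
                return np.eye(3)  # parallel; the antiparallel case is omitted here
            k = k / s
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            return np.eye(3) + s * K + (1 - c) * (K @ K)

        # Hypothetical arm points, shoulder joint, and current/target limb axes.
        arm = np.array([[1.0, 0.5, 0.2], [1.5, 0.9, 0.4], [2.0, 1.3, 0.5]])
        shoulder = np.array([0.8, 0.3, 0.1])
        current_axis = arm[-1] - shoulder                # toward the hand
        target_axis = np.array([1.0, 0.0, 0.0])          # straight out: T-pose

        # Rigidly rotate the segment about the shoulder into the target pose.
        R = rotation_between(current_axis, target_axis)
        t_pose_arm = (arm - shoulder) @ R.T + shoulder
        print(np.round(t_pose_arm, 2))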

    Illustrative multi-volume rendering for PET/CT scans

    In this paper, we present illustrative visualization techniques for PET/CT datasets. PET/CT scanners acquire both PET and CT image data in order to combine functional metabolic information with structural anatomical information. Current visualization techniques mainly rely on 2D image fusion to convey this combined information to physicians. We introduce an illustrative 3D visualization technique specifically designed for use with PET/CT datasets. It allows the user to easily detect foci in the PET data and to localize these regions by providing anatomical context from the CT data. Furthermore, we provide a transfer function specifically designed for PET data that facilitates the investigation of interesting regions. Our technique gives users a quick overview of regions of interest and can be used in treatment planning, doctor-patient communication, and interdisciplinary communication. We conducted a qualitative evaluation with medical experts to validate the utility of our method in clinical practice.
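
    A transfer function in this context maps PET intensity (e.g., normalized uptake) to color and opacity so that metabolically active foci stand out while low-uptake tissue stays transparent. A minimal 1D sketch with NumPy follows; the breakpoints and color ramp are invented, not the paper's actual design.

        import numpy as np

        def pet_transfer_function(intensity):
            """Map normalized PET uptake in [0, 1] to RGBA values.

            Low uptake stays fully transparent; higher uptake ramps from
            yellow to red with increasing opacity so foci pop out in 3D.
            """
            x = np.clip(np.asarray(intensity, dtype=float), 0.0, 1.0)
            r = np.ones_like(x)
            g = np.clip(1.5 - 1.5 * x, 0.0, 1.0)          # yellow -> red
            b = np.zeros_like(x)
            alpha = np.where(x < 0.4, 0.0, (x - 0.4) / 0.6) ** 2
            return np.stack([r, g, b, alpha], axis=-1)

        # Build a 256-entry lookup table, as used by volume-rendering pipelines.
        lut = pet_transfer_function(np.linspace(0.0, 1.0, 256))
        print(lut.shape)  # (256, 4)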

    The Role of Depth Perception in XR from a Neuroscience Perspective: A Primer and Survey

    Augmented and virtual reality (XR) are potentially powerful tools for enhancing the efficiency of interactive visualization of complex data in biology and medicine. The benefits of visualizing digital objects in XR mainly arise from enhanced depth perception due to the stereoscopic nature of XR head-mounted devices. With the added depth dimension, XR is in a prime position to convey complex information and support tasks where 3D information is important. To inform the development of novel XR applications in the biology and medicine domain, we present a survey that reviews the neuroscientific basis underlying the immersive features of XR. To make this literature more accessible to the visualization community, we first describe the basics of the visual system, highlighting how visual features are combined into objects and processed in higher cortical areas, with a special focus on depth vision. Based on state-of-the-art findings in the neuroscience literature on depth perception, we provide several recommendations for developers and designers. Our aim is to aid the development of XR applications, to strengthen the development of tools aimed at molecular visualization, medical education, and surgery, and to inspire new application areas.